
    Substructure and Boundary Modeling for Continuous Action Recognition

    This paper introduces a probabilistic graphical model for continuous action recognition with two novel components: a substructure transition model and a discriminative boundary model. The first component encodes the sparse and global temporal transition prior between action primitives in a state-space model to handle the large spatio-temporal variations within an action class. The second component enforces the action duration constraint in a discriminative way to locate the transition boundaries between actions more accurately. The two components are integrated into a unified graphical structure to enable effective training and inference. Our comprehensive experimental results on both public and in-house datasets show that, with the capability to incorporate additional information that had not been explicitly or efficiently modeled by previous methods, our proposed algorithm achieves significantly improved performance for continuous action recognition. Comment: detailed version of the CVPR 2012 paper; 15 pages, 6 figures.
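As an illustration of how a sparse transition prior between action primitives constrains sequence decoding, here is a minimal Viterbi sketch. The two-state setup, the log-probabilities, and the forbidden transition are hypothetical and not taken from the paper, which uses a richer graphical model with a discriminative boundary component:

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Decode the most likely state (action primitive) sequence.
    obs_loglik[t][s] : log-likelihood of frame t under state s
    log_trans[p][s]  : log transition prior p -> s (a sparse prior
                       forbids a transition with -inf)
    log_init[s]      : log prior over the initial state
    """
    T, S = len(obs_loglik), len(log_init)
    dp = [[log_init[s] + obs_loglik[0][s] for s in range(S)]]
    back = []
    for t in range(1, T):
        row, ptr = [], []
        for s in range(S):
            p = max(range(S), key=lambda q: dp[-1][q] + log_trans[q][s])
            row.append(dp[-1][p] + log_trans[p][s] + obs_loglik[t][s])
            ptr.append(p)
        dp.append(row)
        back.append(ptr)
    state = max(range(S), key=lambda s: dp[-1][s])
    path = [state]
    for ptr in reversed(back):  # backtrace to recover the full path
        state = ptr[state]
        path.append(state)
    return path[::-1]

NINF = -math.inf
# Two primitives; the sparse prior forbids returning from state 1 to 0
log_trans = [[math.log(0.8), math.log(0.2)], [NINF, 0.0]]
log_init = [math.log(0.5), math.log(0.5)]
obs = [[0.0, -5.0], [-5.0, 0.0], [-1.0, -0.5]]
path = viterbi(obs, log_trans, log_init)
```

Because the prior forbids 1 → 0, the decoder stays in state 1 at the final frame even though the observation there only weakly prefers it.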

    Multiequilibria of Oligomeric Thermophilic DNA Replication Polymerases

    DNA polymerases are essential enzymes in all domains of life for both DNA replication and repair. We examined the thermodynamics and enzymatic activity related to the oligomerization of the hyperthermophilic archaeal Sulfolobus solfataricus (Sso) primary DNA replication polymerase (Dpo1) and lesion bypass polymerase (Dpo4). Both Dpo1 and Dpo4 bind to DNA with initial high-affinity monomeric binding followed by sequential binding of additional molecules at higher enzyme concentrations. Gel filtration, chemical crosslinking, isothermal titration calorimetry (ITC), and fluorescence anisotropy experiments all show a stoichiometry of three Dpo1 and two to four Dpo4 molecules bound to a single DNA substrate. In particular, oligomeric Dpo1-DNA complexes significantly increase both the kinetic rate and the processivity of DNA synthesis. Preferential binding of the accurate DNA replication polymerase Dpo1 over the error-prone DNA lesion bypass polymerase Dpo4 is essential for the proper maintenance of the genome. Binding discrimination between these polymerases on DNA templates is complicated by the fact that the multiple oligomeric species are influenced by concentration and temperature. Fluorescence anisotropy experiments were used to separate discrete binding events for the formation of trimeric Dpo1 and dimeric Dpo4 complexes on DNA. The associated equilibria are temperature-dependent, generally leading to more favorable binding at higher temperatures for both polymerases. At high temperatures, DNA binding of the Dpo1 monomer is slightly favored over binding of the Dpo4 monomer, but binding of the Dpo1 trimer is strongly favored over binding of the Dpo4 dimer, thus providing thermodynamic selection. The ITC results showed an unusually strong temperature dependence of the change in heat capacity (∆C_p^o), which switches from positive to negative values with increasing temperature. The observed sign change in ∆C_p^o does not derive from temperature-dependent changes in structure, protonation, or electrostatics. Rather, we propose that temperature affects the coupled equilibria between self-association of free Dpo1 or Dpo4 and their binding to DNA. Taken together, Sso differentiates between Dpo1 and Dpo4 binding to DNA by integrating molecular and cellular variables, including concentration, temperature, oligomerization, and coupled equilibria, to maintain uninterrupted, rapid, and high-fidelity DNA replication.
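As a minimal illustration of how a heat-capacity change propagates into binding free energy, the sketch below evaluates the standard relations ∆G°(T) = ∆H°(T) − T·∆S°(T), with ∆H°(T) = ∆H°(T₀) + ∆Cp·(T − T₀) and ∆S°(T) = ∆S°(T₀) + ∆Cp·ln(T/T₀). All numerical parameters are hypothetical rather than fitted values from this study, and ∆Cp is held constant here, whereas the study observed it changing sign with temperature:

```python
import math

def delta_G(T, dH0, dS0, dCp, T0=298.15):
    """Binding free energy (kJ/mol) at temperature T (K), given a
    reference enthalpy dH0 (kJ/mol), entropy dS0 (kJ/mol/K), and a
    constant heat-capacity change dCp (kJ/mol/K) at reference T0."""
    dH = dH0 + dCp * (T - T0)            # enthalpy shifts linearly in T
    dS = dS0 + dCp * math.log(T / T0)    # entropy shifts logarithmically
    return dH - T * dS

# Hypothetical, entropy-driven binding parameters (not fitted values)
g_low = delta_G(310.0, dH0=20.0, dS0=0.15, dCp=-2.0)
g_high = delta_G(350.0, dH0=20.0, dS0=0.15, dCp=-2.0)
```

At the reference temperature the expression reduces to ∆H°(T₀) − T₀·∆S°(T₀), which is a quick sanity check on any implementation.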

    A Study on the Effect of Design Factors of Slim Keyboard's Tactile Feedback

    With the rapid development of computer technology, the design of computers and keyboards has moved toward slimness. This change in mobile input devices directly influences users' behavior. Although multi-touch applications allow text entry through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and although manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study examined the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) across the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. MANOVA and Taguchi methods (based on signal-to-noise ratios) were then used to find the optimal level of each design factor. The participants were divided into two groups by their typing speed (30 words/minute). Given the number of variables and levels, the experiments used a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings showed that participants with low typing speed relied primarily on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of design factors likely to yield higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed against a traditional standard keyboard to investigate the influence of user experience on keyboard operation. The results indicated that even the optimal combination provided input performance inferior to that of a standard keyboard. These results can serve as a reference for the development of related products in industry and for application to touch devices and input interfaces that people interact with.
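The Taguchi signal-to-noise analysis mentioned above can be sketched for a "larger is better" response such as typing accuracy or speed; the scores and configurations below are hypothetical, not the study's measurements:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio (dB) for a 'larger is better'
    response: -10 * log10( mean(1 / y^2) )."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

# Hypothetical typing-accuracy scores (%) for two key configurations
sn_a = sn_larger_is_better([92.0, 95.0, 90.0])
sn_b = sn_larger_is_better([80.0, 85.0, 78.0])
# The configuration with the higher S/N ratio is preferred, since the
# ratio rewards both a high mean and low variability across trials.
```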

    WiRiS: Transformer for RIS-Assisted Device-Free Sensing for Joint People Counting and Localization using Wi-Fi CSI

    Channel State Information (CSI) is widely adopted as a feature for indoor localization. Taking advantage of the abundant information in the CSI, people can be accurately sensed even when they carry no devices. However, the positioning error increases severely in non-line-of-sight (NLoS) regions. Reconfigurable intelligent surfaces (RIS), which can re-direct and strengthen reflected signals with massive arrays of meta-material elements, have been introduced to improve signal coverage in NLoS areas. In this paper, we propose WiRiS, a Transformer-based RIS-assisted device-free sensing system that precisely predicts the number of people and their corresponding locations by configuring the RIS. A series of predefined RIS beams is employed to create fingerprinting CSI features that serve as a sequence-to-sequence learning database for the Transformer. We evaluate the proposed WiRiS system in both ray-tracing simulations and real-world experiments. Both demonstrate that people-counting accuracy exceeds 90% and that the localization error reaches centimeter level, outperforming existing benchmarks that do not employ RIS.
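The paper's Transformer maps sequences of beam-indexed CSI fingerprints to counts and locations; as a much simpler illustration of the underlying fingerprinting idea, here is a nearest-neighbor lookup over stored CSI features. The fingerprint values and cell labels are hypothetical:

```python
import math

def nearest_fingerprint(query, database):
    """Return the location label whose stored CSI fingerprint is
    closest (Euclidean distance) to the query feature vector."""
    best_label, best_dist = None, float("inf")
    for label, fp in database.items():
        d = math.dist(query, fp)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical 4-element CSI magnitude fingerprints per grid cell
db = {"cell_A": [0.9, 0.2, 0.4, 0.7],
      "cell_B": [0.1, 0.8, 0.6, 0.3]}
loc = nearest_fingerprint([0.85, 0.25, 0.35, 0.75], db)
```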

    Vec2Gloss: definition modeling leveraging contextualized vectors with Wordnet gloss

    Contextualized embeddings have proven to be powerful tools in multiple NLP tasks. Nonetheless, challenges remain regarding their interpretability and their capability to represent lexical semantics. In this paper, we propose that the task of definition modeling, which aims to generate a human-readable definition of a word, provides a route to evaluate and understand high-dimensional semantic vectors. We propose a `Vec2Gloss' model, which produces a gloss from the target word's contextualized embeddings. The generated glosses in this study are made possible by the systematic gloss patterns provided by the Chinese Wordnet. We devise two dependency indices to measure semantic and contextual dependency, which are used to analyze the generated texts at the gloss and token levels. Our results indicate that the proposed `Vec2Gloss' model opens a new perspective on the lexical-semantic applications of contextualized embeddings.
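The paper's two dependency indices are not specified in this abstract; as a generic illustration of comparing contextualized vectors, the sketch below uses cosine similarity between hypothetical low-dimensional embeddings of a target word in different contexts:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d contextualized vectors for one target word:
# two contexts sharing a sense, one context with a different sense
target = [0.6, 0.8, 0.0]
same_sense = [0.5, 0.85, 0.05]
other_sense = [-0.7, 0.1, 0.7]
sim_same = cosine(target, same_sense)
sim_other = cosine(target, other_sense)
```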

    Integrative model for the selection of a new product launch strategy, based on ANP, TOPSIS and MCGP: a case study

    New product launch strategy is a key competitive advantage in new product development. Selecting a launch strategy is a multiple-criteria decision-making problem, which involves evaluating different criteria or attributes in a strategy selection process. The purpose of this paper is to develop a qualitative and quantitative approach for the selection of a new product launch strategy. The current study proposes an integrated approach combining the analytic network process (ANP), the technique for order preference by similarity to an ideal solution (TOPSIS), and multi-choice goal programming (MCGP), which can be used to determine the best launch strategy for marketing problems. The advantage of this integrated method is that it can consider both tangible (quantitative) and intangible (qualitative) criteria, as well as both "more/higher is better" (e.g., benefit) and "less/lower is better" (e.g., cost) criteria, in the launch strategy selection problem. To show the practicality and usefulness of this method, an empirical example of a watch company is demonstrated. First published online: 03 Nov 201
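Of the three combined methods, TOPSIS has a compact standard form: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and rank alternatives by relative closeness to the ideal. A sketch with hypothetical strategy scores and weights (not the case-study data):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix:  rows = alternatives, columns = criteria
    weights: criterion weights summing to 1
    benefit: True for 'higher is better' criteria, False for cost criteria
    Returns closeness coefficients in [0, 1]; higher is better."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points, respecting benefit vs. cost direction
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical launch strategies scored on market benefit (higher is
# better) and cost (lower is better)
scores = topsis([[8.0, 5.0], [6.0, 3.0], [9.0, 9.0]],
                weights=[0.6, 0.4], benefit=[True, False])
```

In the full paper, ANP would supply the criterion weights and MCGP would handle multiple aspiration levels; TOPSIS alone is shown here.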

    High-Frequency Sea Level Variations Observed by GPS Buoys Using Precise Point Positioning Technique

    In this study, sea level variation observed by a 1-Hz Global Positioning System (GPS) buoy system is verified by comparison with tide gauge records and is decomposed to reveal high-frequency signals that cannot be detected in 6-minute tide gauge records. Compared with tide gauges, which are traditionally used to monitor sea level changes and are affected by land motion, GPS buoys provide high-frequency geocentric measurements of sea level variations. Data from five GPS buoy campaigns near a tide gauge at Anping, Tainan, Taiwan, were processed using the Precise Point Positioning (PPP) technique with four different satellite orbit products from the International GNSS Service (IGS). The GPS buoy data were also processed by a differential GPS (DGPS) method, which requires an additional GPS receiver as a reference station and whose solution accuracy depends on the baseline length. The computation shows that the average Root Mean Square Error (RMSE) difference between the DGPS-derived buoy heights and the tide gauge records is around 3 - 5 cm. When the aforementioned IGS orbit products are used for the PPP-derived buoy heights, the average RMSE differences are 5 - 8 cm, 8 - 13 cm, decimeter level, and decimeter-to-meter level, respectively, so the accuracy of the PPP solution depends strongly on the accuracy of the IGS orbit products. The result therefore indicates that a GPS buoy using PPP has the potential to measure sea surface variations at the several-centimeter level. Finally, high-frequency sea level signals with periods of a few seconds to a day are successfully detected in the GPS buoy observations using the Ensemble Empirical Mode Decomposition (EEMD) method and are identified as waves, meteotsunamis, and tides.
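The buoy-versus-tide-gauge comparison reduces to an RMSE over co-registered height series; a minimal sketch with hypothetical heights (not campaign data):

```python
import math

def rmse(series_a, series_b):
    """Root-mean-square error between two co-registered height series (m)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(series_a, series_b))
                     / len(series_a))

# Hypothetical 1-Hz buoy heights vs. interpolated tide gauge heights (m)
buoy = [1.02, 1.05, 1.01, 0.98, 1.00]
gauge = [1.00, 1.03, 1.04, 1.00, 0.99]
err_m = rmse(buoy, gauge)
```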

    Mitigating Bias for Question Answering Models by Tracking Bias Influence

    Models of various NLP tasks have been shown to exhibit stereotypes, and bias in question answering (QA) models is especially harmful because the output answers may be directly consumed by end users. Datasets exist to evaluate bias in QA models, but bias mitigation techniques for QA models remain under-explored. In this work, we propose BMBI, an approach to mitigating the bias of multiple-choice QA models. Based on the intuition that a model tends to become more biased if it learns from a biased example, we measure the bias level of a query instance by observing its influence on another instance: if the influenced instance becomes more biased, we infer that the query instance is biased. We then use the detected bias level as an additional optimization objective, forming a multi-task learning setting alongside the original QA task. We further introduce a new bias evaluation metric to quantify bias in a comprehensive and sensitive way. We show that our method can be applied to multiple QA formulations across multiple bias categories. It significantly reduces the bias level in all nine bias categories in the BBQ dataset while maintaining comparable QA accuracy.
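The multi-task setting can be sketched as a weighted sum of the QA loss and a bias penalty; the squared-penalty form and the weight `lam` are illustrative assumptions, not the paper's exact objective:

```python
def multitask_loss(qa_loss, bias_level, lam=0.5):
    """Joint training objective: the original QA loss plus a penalty
    proportional to the detected bias level of the training instance.
    lam and the squared penalty are hypothetical choices."""
    return qa_loss + lam * bias_level ** 2

# Same QA loss; the instance detected as biased is penalized more
loss_unbiased = multitask_loss(0.7, 0.0)
loss_biased = multitask_loss(0.7, 0.6)
```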

    Granger causal connectivity dissociates navigation networks that subserve allocentric and egocentric path integration

    Studies on spatial navigation demonstrate a significant role of the retrosplenial complex (RSC) in the transformation of egocentric and allocentric information into complementary spatial reference frames (SRFs). The tight anatomical connections of the RSC with a wide range of other cortical regions processing spatial information support its vital role within the human navigation network. To better understand how different areas of the navigation network interact, we investigated the dynamic causal interactions of brain regions involved in solving a virtual navigation task. EEG signals were decomposed by independent component analysis (ICA) and subsequently examined for information flow between clusters of independent components (ICs) using the direct short-time directed transfer function (sdDTF). The results revealed information flow between the anterior cingulate cortex and the left prefrontal cortex in the theta (4-7 Hz) band, and between the prefrontal, motor, parietal, and occipital cortices as well as the RSC in the alpha (8-13 Hz) band. When participants' preference for a distinct reference frame (egocentric vs. allocentric) during navigation was considered, a dominant occipito-parieto-RSC network was identified in allocentric navigators. These results are in line with the assumption that the RSC, parietal, and occipital cortices are involved in transforming egocentric visual-spatial information into an allocentric reference frame. Moreover, the RSC demonstrated the strongest causal flow during changes in orientation, suggesting that this structure directly provides information on heading changes in humans.
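Granger-causal information flow rests on comparing predictions of a signal from its own past against predictions that also use another signal's past. The sdDTF used in the study is a frequency-domain, multivariate extension of this idea; below is a minimal bivariate, one-lag sketch with hypothetical toy series:

```python
def residual_variance(y, xs):
    """Least-squares regress y on the predictor series in xs (at most
    two, solved via normal equations); return the mean squared residual."""
    k, n = len(xs), len(y)
    A = [[sum(xs[i][t] * xs[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(xs[i][t] * y[t] for t in range(n)) for i in range(k)]
    if k == 1:
        c = [b[0] / A[0][0]]
    else:  # k == 2: solve the 2x2 system by Cramer's rule
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        c = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
             (A[0][0] * b[1] - A[1][0] * b[0]) / det]
    resid = [y[t] - sum(c[i] * xs[i][t] for i in range(k)) for t in range(n)]
    return sum(r * r for r in resid) / n

# Toy series where x drives y with a one-step lag: y[t] = 0.8 * x[t-1]
x = [1.0, -0.5, 0.7, 0.2, -0.9, 0.4, 0.6, -0.3, 0.8, -0.1]
y = [0.0] + [0.8 * v for v in x[:-1]]
y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
var_restricted = residual_variance(y_t, [y_lag])        # y's own past only
var_full = residual_variance(y_t, [y_lag, x_lag])       # add x's past
# var_full being much smaller than var_restricted indicates that
# x Granger-causes y
```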